Lower bounds on kernelization
Authors
Abstract
Preprocessing (data reduction or kernelization) to reduce instance size is one of the most commonly deployed heuristics in practice for tackling computationally hard problems. However, a systematic theoretical study of preprocessing long remained elusive. One reason is that if every input to an NP-hard problem could be reduced in polynomial time to an equivalent instance of smaller size, then repeating this preprocessing would solve the problem in polynomial time, implying P = NP, which is widely believed to be false. The situation changed drastically with the advent of parameterized complexity, which provides a natural framework for analysing preprocessing algorithms. In a parameterized problem, every instance x comes with a positive integer, or parameter, k. The problem is said to admit a kernel if, in polynomial time, we can reduce the size of the instance x to a function of k while preserving the answer. The central notion in parameterized complexity is fixed-parameter tractability (FPT), that is, solvability in f(k) · p(|x|) time for any given instance (x, k), where f is an arbitrary function of the parameter k and p is a polynomial in the input size |x|. It is folklore that a parameterized problem Π is fixed-parameter tractable if and only if there exists a computable function g(k) such that Π admits a kernel of size g(k). However, the kernels obtained from this general result are usually of exponential (or even worse) size, while problem-specific data reductions often achieve quadratic- or even linear-size kernels. So a natural question for any concrete FPT problem is whether it admits a polynomial-time kernelization whose output is, in the worst case, bounded by a polynomial function of the parameter. Despite several attempts, there are fixed-parameter tractable problems for which only kernels of exponential size are known. An explanation was provided in a paper by Bodlaender et al. [BDFH09], where it was shown that unless coNP ⊆ NP/poly, there are fixed-parameter tractable problems that cannot have a polynomial-sized kernel. This triggered further work on kernel lower bounds, and this article surveys recent developments in the area, starting from the framework developed by Bodlaender et al. [BDFH09].
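As a concrete illustration of what a polynomial kernel looks like (not taken from the surveyed article; this is the standard textbook example, with illustrative function and variable names), Buss' reduction rules shrink a Vertex Cover instance (G, k) to an equivalent instance with at most k^2 edges or decide it outright. A minimal Python sketch:

# Sketch of Buss' kernelization for Vertex Cover: repeatedly take any vertex
# of degree > k into the cover (Rule 1); afterwards, if more than k^2 edges
# remain, the answer is NO, otherwise the residual instance is a kernel with
# at most k^2 edges. Names and interface are illustrative only.

def vertex_cover_kernel(edges, k):
    """edges: iterable of 2-tuples (u, v); returns a bool when the instance is
    decided outright, or (kernel_edges, k') with at most k'^2 edges."""
    edges = {frozenset(e) for e in edges}
    changed = True
    while changed and k >= 0:
        changed = False
        # Degree of every vertex that still has an incident edge.
        degree = {}
        for e in edges:
            for v in e:
                degree[v] = degree.get(v, 0) + 1
        # Rule 1: a vertex of degree > k must be in every cover of size <= k,
        # so take it, delete its incident edges, and decrement the budget.
        high = [v for v, d in degree.items() if d > k]
        if high:
            v = high[0]
            edges = {e for e in edges if v not in e}
            k -= 1
            changed = True
    if k < 0:
        return False          # budget exhausted while edges remained
    if len(edges) > k * k:
        return False          # max degree <= k, so k vertices cover <= k^2 edges
    if not edges:
        return True           # no edges left: trivially a YES-instance
    return edges, k           # kernel: at most k^2 edges

# Example usage: a star K_{1,5} plus a disjoint triangle, asked for a cover of
# size 3; the kernelized instance is small enough for any exact solver.
if __name__ == "__main__":
    star = [(0, i) for i in range(1, 6)]
    triangle = [(6, 7), (7, 8), (8, 6)]
    print(vertex_cover_kernel(star + triangle, 3))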
Similar resources
Weak compositions and their applications to polynomial lower bounds for kernelization
We introduce a new form of composition called weak composition that allows us to obtain polynomial kernelization lower bounds for several natural parameterized problems. Let d ≥ 2 be some constant and let L1, L2 ⊆ {0, 1}∗ × N be two parameterized problems where the unparameterized version of L1 is NP-hard. Assuming coNP ⊄ NP/poly, our framework essentially states that composing t L1-instances ...
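For orientation only, the following LaTeX sketch gives the weak-composition idea as I recall it from this line of work; the exact quantitative statement and the precise handling of the parameter should be taken from the paper itself, not from this paraphrase.

\documentclass{article}
\usepackage{amsmath,amssymb,amsthm}
\newtheorem{definition}{Definition}
\begin{document}
% Rough sketch, reconstructed from memory: a composition whose output
% parameter grows only like t^{1/d} already rules out kernels of size
% O(k^{d-epsilon}).
\begin{definition}[weak $d$-composition, sketch]
For a constant $d \ge 2$, a \emph{weak $d$-composition} from $L_1$ to $L_2$
is a polynomial-time algorithm that maps instances
$(x_1, k), \dots, (x_t, k)$ of $L_1$ to an instance $(y, k')$ of $L_2$ such
that
\[
  (y, k') \in L_2 \iff (x_i, k) \in L_1 \text{ for some } i,
  \qquad
  k' \le t^{1/d} \cdot \operatorname{poly}(k).
\]
If the unparameterized version of $L_1$ is NP-hard and such a composition
exists, then $L_2$ has no kernel of size $O(k^{d-\varepsilon})$ for any
$\varepsilon > 0$, unless $\mathrm{coNP} \subseteq \mathrm{NP/poly}$.
\end{definition}
\end{document}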
Kernel: Lower and Upper Bounds
In this lecture note we try to provide a portal into the emerging field of kernelization. We exhibit, through examples, various tools to prove both lower and upper bounds on kernel sizes.
Cross-Composition: A New Technique for Kernelization Lower Bounds
We introduce a new technique for proving kernelization lower bounds, called cross-composition. A classical problem L cross-composes into a parameterized problem Q if an instance of Q with polynomially bounded parameter value can express the logical OR of a sequence of instances of L. Building on work by Bodlaender et al. (ICALP 2008) and using a result by Fortnow and Santhanam (STOC 2008) we sh...
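For reference, here is a hedged LaTeX sketch of the OR-variant of cross-composition, paraphrased from the standard definition of Bodlaender, Jansen and Kratsch; the published definition additionally works modulo a polynomial equivalence relation on the source instances, so consult the paper for the precise statement.

\documentclass{article}
\usepackage{amsmath,amssymb,amsthm}
\newtheorem{definition}{Definition}
\begin{document}
% Sketch of the OR-cross-composition definition, paraphrased from memory.
\begin{definition}[OR-cross-composition, sketch]
A language $L \subseteq \Sigma^*$ \emph{cross-composes} into a parameterized
problem $Q \subseteq \Sigma^* \times \mathbb{N}$ if there is a polynomial-time
algorithm that, given instances $x_1, \dots, x_t \in \Sigma^*$, outputs an
instance $(y, k)$ with
\[
  (y, k) \in Q \iff x_i \in L \text{ for some } 1 \le i \le t,
  \qquad
  k \le \operatorname{poly}\Bigl(\max_i |x_i| + \log t\Bigr).
\]
If an NP-hard language $L$ cross-composes into $Q$, then $Q$ admits no
polynomial kernel unless $\mathrm{coNP} \subseteq \mathrm{NP/poly}$.
\end{definition}
\end{document}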
Recent developments in kernelization: A survey
Kernelization is a formalization of efficient preprocessing, aimed mainly at combinatorially hard problems. Empirically, preprocessing is highly successful in practice, e.g., in state-of-the-art SAT and ILP solvers. The notion of kernelization from parameterized complexity makes it possible to rigorously prove upper and lower bounds on, e.g., the maximum output size of a preprocessing in terms ...
Kernelization, Generation of Bounds, and the Scope of Incremental Computation for Weighted Constraint Satisfaction Problems
In this paper, we present an algorithmic framework for kernelization of combinatorial problems posed as weighted constraint satisfaction problems (WCSPs). Our kernelization technique employs a polynomial-time maxflow-based algorithm to fix the optimal values of a subset of the variables in a preprocessing phase. It thereby reduces the set of variables for which exhaustive search is eventually r...
Kernelization Lower Bounds By Cross-Composition
We introduce the cross-composition framework for proving kernelization lower bounds. A classical problem L AND/OR-cross-composes into a parameterized problem Q if it is possible to efficiently construct an instance of Q with polynomially bounded parameter value that expresses the logical AND or OR of a sequence of instances of L. Building on work by Bodlaender et al. (ICALP 2008) and using a re...
Journal: Discrete Optimization
Volume 8, Issue -
Pages -
Published 2011